
    Distributed accelerated Nash equilibrium learning for two-subnetwork zero-sum game with bilinear coupling

    summary: This paper proposes a distributed accelerated first-order continuous-time algorithm with O(1/t^2) convergence to Nash equilibria in a class of two-subnetwork zero-sum games with bilinear couplings. First-order methods, which use only subgradients of the objective functions, are widely used in distributed/parallel algorithms for large-scale and big-data problems because of their simple structure. In the worst case, however, first-order methods for two-subnetwork zero-sum games typically achieve only asymptotic or O(1/t) convergence. In contrast to existing time-invariant first-order methods, this paper designs a distributed accelerated algorithm by combining saddle-point dynamics with time-varying derivative feedback. With suitably chosen parameters, the algorithm achieves O(1/t^2) convergence in terms of the duality gap function, without any uniform or strong convexity requirement. Numerical simulations show the efficacy of the algorithm.
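
The following is a minimal, centralized sketch of the classical saddle-point dynamics that the summary names as a building block, run on a toy zero-sum game with bilinear coupling L(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2. Everything here (the quadratic terms, the random coupling matrix A, the integrator, the time horizon) is an illustrative assumption, not the paper's method; the paper's distributed algorithm additionally uses the two-subnetwork structure and time-varying derivative feedback to reach the O(1/t^2) duality-gap rate, which this baseline does not implement.

```python
# Baseline (non-accelerated) saddle-point dynamics on a toy bilinear-coupled
# zero-sum game; the duality gap is tracked as the convergence measure.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, m))   # bilinear coupling matrix (assumed toy data)

def saddle_point_dynamics(t, z):
    # z = [x, y]; gradient descent in x, gradient ascent in y:
    #   x_dot = -grad_x L = -(x + A y),   y_dot = +grad_y L = A^T x - y
    x, y = z[:n], z[n:]
    return np.concatenate([-(x + A @ y), A.T @ x - y])

def duality_gap(x, y):
    # max_y L(x, y) - min_x L(x, y) in closed form for this quadratic game
    ATx, Ay = A.T @ x, A @ y
    return 0.5 * (x @ x + ATx @ ATx + Ay @ Ay + y @ y)

z0 = rng.standard_normal(n + m)
sol = solve_ivp(saddle_point_dynamics, (0.0, 10.0), z0, max_step=0.01)

for k in range(0, sol.t.size, sol.t.size // 5):
    x_k, y_k = sol.y[:n, k], sol.y[n:, k]
    print(f"t = {sol.t[k]:6.2f}   duality gap = {duality_gap(x_k, y_k):.3e}")
```

In this toy (strongly convex-strongly concave) setting the gap decays exponentially along the trajectory; the paper's contribution concerns the harder merely-convex case, where such baseline dynamics are only guaranteed asymptotic or O(1/t) rates and the accelerated design is needed for O(1/t^2).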